Split and cleanup client #9

Merged: @f321x merged 44 commits into master from splitting_structures on Sep 3, 2024
Conversation

@f321x (Owner) commented on Jul 21, 2024:

This PR tries to split up the client logic into clearly separated parts to make the code more readable and easier to test and extend.

Resolved review threads: shell.nix; common/src/nostr/mod.rs (outdated)
@f321x marked this pull request as ready for review on July 21, 2024, 17:18
@f321x requested a review from rodant on July 21, 2024, 17:19
@f321x (Owner, Author) left a comment:

All changes up to 7e1b21a seem logical to me; the new (state machine) structure seems much cleaner and better separated. I think removing the mint URL from the trade contract is also risk-free.

@rodant (Collaborator) commented on Aug 18, 2024:

Looked at commits up to 85ce4d0; great, NIP-17 is working now. It compiles fine here and the Nostr communication works (tested up to the communication step with the mint). Maybe a (backoff?) retry mechanism would make sense if connecting to the mint fails, especially for the coordinator? The code/changes look clean and make sense to me 👍

Probably meaningful for the coordinator, but I think it doesn't communicate with the mint so far; it is only supposed to sign a proof in case of a dispute. I'd say let's keep an eye on this.
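
Not part of this PR, but as a reference for the retry idea: a minimal sketch of an exponential-backoff wrapper, assuming tokio. The function name, delays, and attempt count are all placeholders, not anything from this codebase.

    use std::time::Duration;

    // Retry an async connect with exponential backoff (assumes max_attempts >= 1).
    async fn connect_with_backoff<T, F, Fut>(
        mut connect: F,
        max_attempts: u32,
    ) -> anyhow::Result<T>
    where
        F: FnMut() -> Fut,
        Fut: std::future::Future<Output = anyhow::Result<T>>,
    {
        let mut delay = Duration::from_secs(1);
        for attempt in 1..=max_attempts {
            match connect().await {
                Ok(conn) => return Ok(conn),
                Err(e) if attempt < max_attempts => {
                    eprintln!("mint connection failed (attempt {attempt}): {e}; retrying in {delay:?}");
                    tokio::time::sleep(delay).await;
                    delay *= 2; // back off: 1s, 2s, 4s, ...
                }
                Err(e) => return Err(e),
            }
        }
        unreachable!("the loop returns on the final attempt")
    }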

nostr_client: Arc<NostrClient>,
) -> anyhow::Result<RegisteredEscrowClient> {
let nostr_client_ref = nostr_client.clone();
let reg_msg_fut =
@f321x (Owner, Author) commented on Aug 31, 2024:

I don't understand why we need to spawn a new task to listen for the escrow message and then await it afterwards. Couldn't we just send the contract message to the coordinator and then start listening for the coordinator's answers (receive_escrow_message) with a limit higher than 0 (e.g. 1), so that we receive "old" messages? I guess there is a slight timing benefit if we're already subscribed when expecting the coordinator message, but is it worth the additional complexity?

        let message_filter = Filter::new()
            .kind(Kind::GiftWrap)
            .pubkey(self.keys.public_key())
            .limit(1);

@rodant (Collaborator) commented:

According to the maintainer of rust-nostr, limit(0) is the way to go for GiftWrapped events and thus also for NIP-17; see the thread rust-nostr/nostr#173 (comment). Those events have a randomized created_at field which can lie up to 2 days in the past. If the client isn't listening before sending the contract, it misses the coordinator's answer in some situations. I have observed this several times.
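
For reference, the filter rodant describes, reusing the nostr-sdk calls already shown in this thread; the subscription has to be in place before the contract is sent:

    // limit(0) requests no stored events; the subscription stays open and
    // delivers gift wraps as they arrive, regardless of their randomized
    // created_at (up to 2 days in the past per NIP-17).
    let message_filter = Filter::new()
        .kind(Kind::GiftWrap)
        .pubkey(self.keys.public_key())
        .limit(0);
    let subscription_id = self.client.subscribe(vec![message_filter], None).await?.val;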

@f321x (Owner, Author) commented on Sep 1, 2024:

When you observed the client missing events when subscribing after the event was published, did you test with Timestamp::now() or limit(0)? Just for my general understanding: I see now why Timestamp::now() doesn't work due to the random created_at, but in my understanding limit(1) should still work, as it just tells the relay to return the last stored event. To make it reliable even when other DMs stored for the pubkey could be returned first due to the random created_at, we could combine a timestamp of now minus 2 days with a higher limit (e.g. 10), then use the newest event by the created_at of the wrapped rumor (kind 14), which should be correct and not randomized.

Like this for example:

    pub async fn receive_escrow_message(&self, timeout_secs: u64) -> anyhow::Result<String> {
        let message_filter = Filter::new()
            .kind(Kind::GiftWrap)
            .pubkey(self.keys.public_key())
            .since(Timestamp::now() - Duration::from_secs(172900)) // 2 days + 100 sec
            .limit(20); // some limit

        let subscription_id = self.client.subscribe(vec![message_filter], None).await?.val;

        let mut notifications = self.client.notifications();

        let loop_future = async {
            loop {
                if let Ok(notification) = notifications.recv().await {
                    if let RelayPoolNotification::Event { event, .. } = notification {
                        let rumor = self.client.unwrap_gift_wrap(&event).await?.rumor;
                        // only accept rumors with a plausible (recent) timestamp
                        if rumor.kind == Kind::PrivateDirectMessage
                            && rumor.created_at > Timestamp::now() - Duration::from_secs(120)
                        {
                            break Ok(rumor.content) as anyhow::Result<String>;
                        }
                    }
                }
            }
        };

        // completing the truncated sketch: race the loop against the timeout,
        // then drop the subscription (assumes tokio)
        let result = tokio::time::timeout(Duration::from_secs(timeout_secs), loop_future).await?;
        self.client.unsubscribe(subscription_id).await;
        result
    }

    pub async fn register_trade(
        &self,
        nostr_client: Arc<NostrClient>,
    ) -> anyhow::Result<RegisteredEscrowClient> {
        let coordinator_pk = &self.escrow_contract.npubkey_coordinator;
        let contract_message = serde_json::to_string(&self.escrow_contract)?;
        dbg!("sending contract to coordinator...");
        nostr_client
            .client
            .send_private_msg(*coordinator_pk, &contract_message, None)
            .await?;

        let registration_message = nostr_client.receive_escrow_message(20).await?;
        let escrow_registration = serde_json::from_str(&registration_message)?;
        Ok(RegisteredEscrowClient {
            prev_state: self,
            escrow_registration,
        })
    }

@rodant (Collaborator) commented:

I've observed this issue for a long time, both when we had the since(now) filter and after switching to limit(0). It happened when the buyer sends the contract first and the seller second: in that case the coordinator almost always fires the registration before the buyer starts listening for events. You can see this for example on master and in earlier stages of this branch. The fix of listening before sending the contract to the coordinator came well after the switch to NIP-17.

We still have a similar issue in a later step on the seller side: it can start listening for the token after the buyer has already sent it, which I have also observed. I think the approach with a higher limit isn't reliable enough and we need a different design: the Nostr client must start listening from the very beginning and buffer relevant messages; the escrow client can then ask for messages at any time through a new version of receive_escrow_message and pick the one it is looking for. It's an idea similar to an actor system, where the Nostr client acts as a mailbox for the escrow client. Note that in this approach we wouldn't limit the filter; limit(0) would keep listening for new events.
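
A rough sketch of that mailbox idea; the struct layout and names (spawn_listener, inbox) are hypothetical, and it assumes the nostr-sdk and tokio APIs already used elsewhere in this PR:

    use std::sync::Arc;

    use nostr_sdk::prelude::*;
    use tokio::sync::Mutex;

    pub struct NostrClient {
        client: Client,
        keys: Keys,
        // Mailbox: unwrapped rumors buffered until the escrow client asks.
        inbox: Arc<Mutex<Vec<UnsignedEvent>>>,
    }

    impl NostrClient {
        // Spawned once at start-up, right after subscribing with limit(0):
        // keeps filling the mailbox for as long as the client lives.
        fn spawn_listener(&self) {
            let client = self.client.clone();
            let inbox = self.inbox.clone();
            let mut notifications = client.notifications();
            tokio::spawn(async move {
                while let Ok(notification) = notifications.recv().await {
                    if let RelayPoolNotification::Event { event, .. } = notification {
                        if let Ok(unwrapped) = client.unwrap_gift_wrap(&event).await {
                            inbox.lock().await.push(unwrapped.rumor);
                        }
                    }
                }
            });
        }

        // The escrow client asks for the message it is waiting for; anything
        // else stays buffered for later calls.
        pub async fn receive_escrow_message(
            &self,
            matches: impl Fn(&UnsignedEvent) -> bool,
        ) -> Option<String> {
            let mut inbox = self.inbox.lock().await;
            let pos = inbox.iter().position(|rumor| matches(rumor))?;
            Some(inbox.remove(pos).content)
        }
    }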

@f321x (Owner, Author) commented on Sep 2, 2024:

I think we could just split receiving into two parts: one function that subscribes to the filter and returns the subscription, and another function that pulls events from the subscription channel (.recv()). That way nostr-sdk is the buffer, which seems more elegant than spawning tasks.
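
As a sketch (function names hypothetical): nostr-sdk's notifications() hands out a tokio broadcast receiver, which only buffers events that arrive after it is created, so part 1 has to run before the contract is sent:

    use tokio::sync::broadcast;

    // Part 1: subscribe once and hand out the notification receiver.
    // The receiver itself is the buffer between calls to part 2.
    pub async fn subscribe_to_escrow_messages(
        &self,
    ) -> anyhow::Result<broadcast::Receiver<RelayPoolNotification>> {
        let filter = Filter::new()
            .kind(Kind::GiftWrap)
            .pubkey(self.keys.public_key())
            .limit(0);
        self.client.subscribe(vec![filter], None).await?;
        Ok(self.client.notifications())
    }

    // Part 2: pull the next private DM from the receiver on demand.
    pub async fn next_escrow_message(
        &self,
        notifications: &mut broadcast::Receiver<RelayPoolNotification>,
    ) -> anyhow::Result<String> {
        loop {
            if let RelayPoolNotification::Event { event, .. } = notifications.recv().await? {
                let rumor = self.client.unwrap_gift_wrap(&event).await?.rumor;
                if rumor.kind == Kind::PrivateDirectMessage {
                    return Ok(rumor.content);
                }
            }
        }
    }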

On the other hand, Nostr clients like Amethyst also seem to be able to pull NIP-17 events after the fact; I guess they just subscribe to NIP-17 events and then sort them by the created_at in the rumor.

I think we should also consider that traders are most probably not always online at the same time and go offline between the individual steps while waiting for the other party. So we may need to be able to continue at a specific step, loading the state from somewhere. I've opened issue #14.

@rodant (Collaborator) commented:

I like your idea of trying to use the nostr-sdk as an event buffer, I'll try it out.

@rodant (Collaborator) commented:

I also think Amethyst implements the kind of logic you mentioned.

@rodant (Collaborator) commented:

It works as you proposed above, subscribing first at start-up and then consuming on demand, relying only on the nostr-sdk. It is simpler now, but I still have to clean up a little bit. The escrow client can have a field for the nostr client again :)

Thanks for your comments and ideas, really helpful!

I also found a new problem: sometimes the escrow client expects a registration but gets a token message and panics. I could extract this into a new issue, agree? When it doesn't happen (starting the buyer first), everything runs successfully so far.

@f321x (Owner, Author) commented:

Cool 👍
Yeah, let's open an issue for this, so we get this PR merged and can work on smaller issues in parallel.

@rodant (Collaborator) left a comment:

From my point of view this PR can be merged.

@f321x merged commit 7e15f9e into master on Sep 3, 2024
@f321x deleted the splitting_structures branch on September 3, 2024, 08:33